On the Equivalence of the SMO and MDM Algorithms for SVM Training

Authors

  • Jorge López Lázaro
  • Álvaro Barbero Jiménez
  • José R. Dorronsoro
Abstract

SVM training is usually discussed under two different algorithmic points of view. The first one is provided by decomposition methods such as SMO and SVMLight, while the second one encompasses geometric methods that try to solve a Nearest Point Problem (NPP), the Gilbert–Schlesinger–Kozinec (GSK) and Mitchell–Demyanov–Malozemov (MDM) algorithms being the most representative ones. In this work we will show that, indeed, both approaches are essentially coincident. More precisely, we will show that a slight modification of SMO, in which at each iteration both updating multipliers correspond to patterns in the same class, solves NPP and, moreover, that this modification coincides with an extended MDM algorithm. Besides this, we also propose a new way to apply the MDM algorithm to NPP problems over reduced convex hulls.
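
To make the geometric viewpoint concrete, the sketch below illustrates an MDM-style iteration for the Nearest Point Problem in which, as in the SMO modification described above, both updated multipliers always belong to the same class. It is a minimal illustration only, assuming a linear kernel and ±1 labels stored in NumPy arrays; the name mdm_npp_sketch and all implementation details are ours and are not taken from the paper, which also treats the reduced-convex-hull case.

```python
import numpy as np

def mdm_npp_sketch(X, y, n_iter=1000, tol=1e-6):
    """MDM-style iteration for the Nearest Point Problem (NPP): approach the
    closest pair of points between the convex hulls of the two classes.
    In every step the two updated multipliers belong to the same class,
    which is the SMO modification the paper relates to MDM.
    X is (n_samples, n_features), y holds +/-1 labels; linear kernel only.
    Illustrative sketch, not the paper's exact algorithm."""
    classes = {+1: np.where(y == +1)[0], -1: np.where(y == -1)[0]}

    # Start from the class means (uniform convex combinations).
    alpha = np.zeros(len(y), dtype=float)
    for c, idx in classes.items():
        alpha[idx] = 1.0 / len(idx)

    def hull_difference():
        # w = w_plus - w_minus, the vector joining the two current hull points.
        return sum(c * (X[idx].T @ alpha[idx]) for c, idx in classes.items())

    for _ in range(n_iter):
        w = hull_difference()

        # In each class, find the pattern that should receive weight (smallest
        # signed projection onto w) and the active pattern that donates weight
        # (largest projection); keep the class with the largest violation.
        best = (-np.inf, None, None)
        for c, idx in classes.items():
            proj = c * (X[idx] @ w)
            i_low = idx[np.argmin(proj)]
            active = idx[alpha[idx] > 0]
            i_up = active[np.argmax(c * (X[active] @ w))]
            gain = c * float((X[i_up] - X[i_low]) @ w)
            if gain > best[0]:
                best = (gain, i_low, i_up)

        gain, i_low, i_up = best
        if gain < tol:                      # (approximate) optimality reached
            break

        # Optimal step along X[i_low] - X[i_up], clipped so alpha[i_up] >= 0;
        # only two multipliers of one class change per iteration.
        d = X[i_low] - X[i_up]
        t = min(gain / float(d @ d), alpha[i_up])
        alpha[i_low] += t
        alpha[i_up] -= t

    return alpha, hull_difference()
```

For linearly separable data the returned vector w approaches the difference of the two closest hull points, whose direction defines the maximum-margin hyperplane; the reduced-convex-hull variant mentioned in the abstract additionally caps every multiplier below 1 so that non-separable problems remain well posed.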

Similar articles

An accelerated MDM algorithm for SVM training

In this work we will propose an acceleration procedure for the Mitchell–Demyanov–Malozemov (MDM) algorithm (a fast geometric algorithm for SVM construction) that may yield quite large training savings. While decomposition algorithms such as SVMLight or SMO are usually the SVM methods of choice, we shall show that there is a relationship between SMO and MDM that suggests that, at least in their ...

Comparison of different algorithms for land use mapping in dry climate using satellite images: a case study of the Central regions of Iran

The objective of this research was to determine the best model and compare performances in terms of producing land use maps from six supervised classification algorithms. As a result, different algorithms such as the minimum distance of mean (MDM), Mahalanobis distance (MD), maximum likelihood (ML), artificial neural network (ANN), spectral angle mapper (SAM), and support vector machine (SVM) were...

Heuristics for Improving the Performance of Online SVM Training Algorithms

Recently, the sequential minimal optimization algorithm (SMO) was introduced [1, 2] as an effective method for training support vector machines (SVMs) on classification tasks defined on sparse data sets. SMO differs from most SVM algorithms in that it does not require a quadratic programming (QP) solver. One problem with SMO is that its rate of convergence slows down dramatically when data is n...
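
The reason SMO needs no generic QP solver is that the dual restricted to two multipliers can be optimized in closed form. The sketch below shows such a Platt-style analytic step for the standard C-SVM dual with a precomputed kernel matrix; the name smo_pair_step and the simplifications (skipping non-positive-curvature pairs, no working-set heuristics) are ours and do not reproduce the heuristics proposed in the work above.

```python
import numpy as np

def smo_pair_step(K, y, alpha, b, C, i, j):
    """One analytic SMO step on the multiplier pair (i, j) for the standard
    C-SVM dual (Platt-style update).  K is a precomputed kernel matrix and
    y holds +/-1 labels.  Illustrative sketch; assumes i != j."""
    f = (alpha * y) @ K + b               # current decision values f(x_k)
    E_i, E_j = f[i] - y[i], f[j] - y[j]   # prediction errors on i and j

    # Box [L, H] for alpha[j] that keeps the equality constraint feasible.
    if y[i] == y[j]:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    else:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])

    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]   # curvature along the pair
    if H <= L or eta <= 0:
        return alpha, b                       # degenerate pair: skip it

    # Closed-form optimum for alpha[j], clipped to the box: this analytic
    # step is what lets SMO dispense with a generic QP solver.
    alpha = alpha.copy()
    a_j = float(np.clip(alpha[j] + y[j] * (E_i - E_j) / eta, L, H))
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)

    # Standard bias update derived from the KKT conditions.
    b_i = b - E_i - y[i] * (a_i - alpha[i]) * K[i, i] - y[j] * (a_j - alpha[j]) * K[i, j]
    b_j = b - E_j - y[i] * (a_i - alpha[i]) * K[i, j] - y[j] * (a_j - alpha[j]) * K[j, j]
    b = b_i if 0 < a_i < C else (b_j if 0 < a_j < C else 0.5 * (b_i + b_j))
    alpha[i], alpha[j] = a_i, a_j
    return alpha, b
```

A full trainer would wrap this step in a working-set selection loop and cache kernel rows, which is where the heuristics discussed in these papers come into play.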

SMO-Style Algorithms for Learning Using Privileged Information

Recently Vapnik et al. [11, 12, 13] introduced a new learning model, called Learning Using Privileged Information (LUPI). In this model, along with standard training data, the teacher supplies the student with additional (privileged) information. In the optimistic case, the LUPI model can improve the bound for the probability of test error from O(1/√n) to O(1/n), where n is the number of trai...

A New Strategy for Selecting Working Sets Applied in SMO

At present sequential minimal optimization (SMO) is one of the most popular and efficient training algorithms for support vector machines (SVM), especially for large-scale problems. A novel strategy for selecting working sets applied in SMO is presented in the paper. Based on the original feasible direction method, the new strategy also takes the efficiency of the kernel cache maintained in SMO into...

Journal title:

Volume   Issue

Pages  -

Publication date: 2008